Parser evaluation using textual entailments
Authors
Abstract
Parser Evaluation using Textual Entailments (PETE) is a shared task in the SemEval-2010 Evaluation Exercises on Semantic Evaluation. The task involves recognizing textual entailments based on syntactic information alone. PETE introduces a new parser evaluation scheme that is formalism independent, less prone to annotation error, and focused on semantically relevant distinctions. This paper describes the PETE task, gives an error analysis of the top-performing Cambridge system, and introduces a standard entailment module that can be used with any parser that outputs Stanford typed dependencies.
Related resources
SemEval-2010 Task 12: Parser Evaluation Using Textual Entailments
Parser Evaluation using Textual Entailments (PETE) is a shared task in the SemEval-2010 Evaluation Exercises on Semantic Evaluation. The task involves recognizing textual entailments based on syntactic information alone. PETE introduces a new parser evaluation scheme that is formalism independent, less prone to annotation error, and focused on semantically relevant distinctions.
TAG Parser Evaluation using Textual Entailments
Parser Evaluation using Textual Entailments (PETE, Yuret et al. (2013)) is a restricted textual entailment task designed to evaluate in a uniform manner parsers that produce different representations of syntactic structure. In PETE, entailments can be resolved using syntactic relations alone, and do not implicate lexical semantics or world knowledge. We evaluate TAG parsers on the PETE task, an...
SCHWA: PETE Using CCG Dependencies with the C&C Parser
This paper describes the SCHWA system entered by the University of Sydney in SemEval 2010 Task 12 – Parser Evaluation using Textual Entailments (Yuret et al., 2010). Our system achieved an overall accuracy of 70% in the task evaluation. We used the C&C parser to build CCG dependency parses of the truth and hypothesis sentences. We then used partial match heuristics to determine whether the syst...
Microsoft Research at RTE-2: Syntactic Contributions in the Entailment Task: an implementation
The data set made available by the PASCAL Recognizing Textual Entailment Challenge provides a great opportunity to focus on the very difficult task of determining whether one sentence (the hypothesis, H) is entailed by another (the text, T). In RTE-1 (2005), we submitted an analysis of the test data with the purpose of isolating the set of T-H pairs whose categorization could be accurately pred...
Cambridge: Parser Evaluation Using Textual Entailment by Grammatical Relation Comparison
This paper describes the Cambridge submission to the SemEval-2010 Parser Evaluation using Textual Entailment (PETE) task. We used a simple definition of entailment, parsing both T and H with the C&C parser and checking whether the core grammatical relations (subject and object) produced for H were a subset of those for T. This simple system achieved the top score for the task out of those syste...
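The Cambridge approach reduces to a set-inclusion test over grammatical relations. A minimal sketch of that check, assuming the parses have already been converted to (relation, head, dependent) triples; the relation labels and the example triples below are illustrative, not the C&C parser's actual output format:

```python
# Sketch of a Cambridge-style entailment decision: H is entailed by T
# if every core grammatical relation (subject/object) in H's parse
# also appears in T's parse. Labels here are illustrative.
CORE_RELATIONS = frozenset({"nsubj", "dobj"})

def entails(text_grs, hypothesis_grs, core=CORE_RELATIONS):
    """Return True if all core relations of the hypothesis parse
    are a subset of the text parse's relations."""
    core_h = {gr for gr in hypothesis_grs if gr[0] in core}
    return core_h.issubset(set(text_grs))

# Hypothetical triples for T: "The devices that Dave made worked."
text_grs = [("nsubj", "worked", "devices"),
            ("nsubj", "made", "Dave"),
            ("dobj", "made", "devices")]
# Hypothetical triples for H: "Dave made the devices."
hyp_grs = [("nsubj", "made", "Dave"),
           ("dobj", "made", "devices")]

print(entails(text_grs, hyp_grs))  # → True
```

Restricting the test to subject and object relations, rather than all relations, is what keeps this simple subset check robust to incidental parse differences between T and H.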
Journal:
Language Resources and Evaluation
Volume 47, Issue
Pages -
Publication date: 2013